Interpreting Image Classification of Keras InceptionV3

This notebook examines the predictions of an image classifier (the InceptionV3 deep neural network) using two interpretation methods:

  • LimeImageExplainer (LIME)
  • Class Activation Maps (CAM)

  • First, LIME and CAM are applied to a single prediction to make their concepts easier to understand.
  • In the following section, both methods are systematically applied to a group of predictions and compared using their masks.
  • The images used in this notebook can be downloaded from ImageNet.
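As a preview of how LIME works: it switches interpretable components (superpixels) on and off, queries the model on the perturbed copies, and fits a simple linear surrogate whose coefficients rank the components. A minimal sketch with a hypothetical black-box scoring function standing in for InceptionV3 (real LIME additionally weights samples by their proximity to the original image, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical black-box classifier: its score depends only on
# superpixels 0 and 3 (this stands in for the real DNN).
def black_box(z):
    return 0.6 * z[:, 0] + 0.4 * z[:, 3]

n_superpixels = 8
# 500 perturbed samples: each row switches superpixels on (1) or off (0)
Z = rng.integers(0, 2, size=(500, n_superpixels)).astype(float)
y = black_box(Z)

# Fit an interpretable linear surrogate (least squares); its coefficients
# are the per-superpixel weights LIME reports.
X = np.column_stack([Z, np.ones(len(Z))])  # add an intercept column
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
weights = coef[:n_superpixels]
print(weights.round(2))  # superpixels 0 and 3 get the largest weights
```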
In [1]:
# Run on CPU only (hide all GPUs from TensorFlow)
import os
os.environ['CUDA_VISIBLE_DEVICES'] = ''

Loading packages

In [2]:
# Keras
import keras
from keras.applications.inception_v3 import InceptionV3, preprocess_input, decode_predictions
from keras.preprocessing.image import load_img, img_to_array
# Tools required for visualizing and processing
import matplotlib.pyplot as plt
import pandas as pd
In [3]:
model_InceptionV3 = InceptionV3(weights = 'imagenet')

Getting started with a single prediction of the model

In [4]:
# Load image and scale to 299x299
image_raw = load_img("/home/common_files/datasets/imagenet/pictures/n01484850/n01484850_10798.jpg",target_size = (299,299))
image_raw
Out[4]:
In [5]:
# Convert the image to an array, add a batch dimension, and apply the model's preprocessing.
image = img_to_array(image_raw)
image = image.reshape((1, image.shape[0], image.shape[1], image.shape[2]))
image = preprocess_input(image)
In [6]:
# Predict label of image
prediction = model_InceptionV3.predict(image)
pd.DataFrame(decode_predictions(prediction, top=3)[0],columns=['col1','class','probability']).iloc[:,1:]
Out[6]:
class probability
0 great_white_shark 0.938497
1 tiger_shark 0.001046
2 grey_whale 0.000489

Loading LIME

In [7]:
import lime
from lime.lime_image import LimeImageExplainer
# Required segmentation utilities for creating superpixels of the image
from skimage.segmentation import mark_boundaries, quickshift
import numpy as np

Create explanation

In [8]:
explainer = LimeImageExplainer()
In [9]:
# This cell is optional; it illustrates the idea behind LimeImageExplainer (LIME performs this step internally).
# LIME segments the image into superpixels with the quickshift algorithm, then queries the model on many perturbed copies of the image created by turning superpixels on and off.

segments = quickshift(image_raw, kernel_size=4, max_dist=200, ratio=0.2,random_seed=42)
superpixel = mark_boundaries(image_raw, segments)
plt.imshow(superpixel)
print('Number of superpixels: ', len(np.unique(segments)))
Number of superpixels:  65
In [10]:
# pass image as a (299, 299, 3) array: 299x299 pixels in RGB
# top_labels = 3 -> create explanations for the top three labels predicted by the model
# num_samples = 500 -> create 500 perturbed images by turning superpixels on and off
# random_seed = 42 -> for reproducibility

explanation = explainer.explain_instance(image[0], model_InceptionV3.predict, top_labels=3, num_samples=500, random_seed=42)
In [11]:
# Show the top five (by default) superpixels with positive weights, i.e. those that supported the model's decision

temp, mask = explanation.get_image_and_mask(explanation.top_labels[0], positive_only=True,  hide_rest=True)
top_superpixels = mark_boundaries(temp /2 + .5,mask)
plt.imshow(top_superpixels)
Out[11]:
<matplotlib.image.AxesImage at 0x7f21bcc32a58>
In [12]:
# explanation.top_labels[0] -> index of the most probable predicted class
temp, mask = explanation.get_image_and_mask(explanation.top_labels[0], positive_only=False,   hide_rest=False)
top_superpixels = mark_boundaries(temp /2 + .5,mask)
plt.imshow(top_superpixels)
Out[12]:
<matplotlib.image.AxesImage at 0x7f21bcb9d400>
In [13]:
temp, mask = explanation.get_image_and_mask(explanation.top_labels[0], positive_only=False, num_features=len(np.unique(segments)),hide_rest=False)
top_superpixels = mark_boundaries(temp /2 + .5,mask)
plt.imshow(top_superpixels)
Out[13]:
<matplotlib.image.AxesImage at 0x7f21bcb7a9e8>

Create explanations for a group of predictions

  • We classified 810,992 images from the ImageNet database with the pretrained model (no retraining)
  • Result: accuracy of 86.9 %
  • Finally, we created a dataset containing all misclassified images
In [37]:
results_images_df = pd.read_hdf('results-inceptionv3-1.h5',key='pictures')
results_images_df.sort_values(['confidence_difference'], ascending=False)
Out[37]:
label_id label_name1 label_name2 prediction_id prediction_name1 prediction_name2 file_name confidence_in_prediction prediction_in_top_5 confidence_in_label confidence_difference
0 207 n02099601 golden_retriever 971 n09229709 bubble n02099601/n02099601_4933.jpg 1.000000 False 2.533398e-08 1.000000
0 207 n02099601 golden_retriever 475 n02965783 car_mirror n02099601/n02099601_2077.jpg 1.000000 False 4.801222e-11 1.000000
0 263 n02113023 Pembroke 805 n04254680 soccer_ball n02113023/n02113023_8857.jpg 1.000000 False 1.617717e-08 1.000000
0 541 n03249569 drum 822 n04311174 steel_drum n03249569/n03249569_14988.jpg 1.000000 False 2.191712e-08 1.000000
0 880 n04509417 unicycle 475 n02965783 car_mirror n04509417/n04509417_1366.jpg 1.000000 False 5.660312e-10 1.000000
0 219 n02102318 cocker_spaniel 475 n02965783 car_mirror n02102318/n02102318_3219.jpg 1.000000 False 1.091116e-12 1.000000
0 615 n03623198 knee_pad 523 n03141823 crutch n03623198/n03623198_11556.jpg 1.000000 False 1.435285e-07 1.000000
0 159 n02087394 Rhodesian_ridgeback 475 n02965783 car_mirror n02087394/n02087394_9296.jpg 0.999999 False 1.916599e-10 0.999999
0 441 n02823750 beer_glass 736 n03982430 pool_table n02823750/n02823750_3697.jpg 0.999999 False 7.516262e-09 0.999999
0 270 n02114548 white_wolf 489 n03000134 chainlink_fence n02114548/n02114548_7952.jpg 0.999998 False 1.171497e-09 0.999998
0 652 n03763968 military_uniform 439 n02817516 bearskin n03763968/n03763968_6928.jpg 0.999998 False 2.249379e-07 0.999998
0 212 n02100735 English_setter 805 n04254680 soccer_ball n02100735/n02100735_3463.jpg 0.999999 False 1.240272e-06 0.999997
0 777 n04141327 scabbard 439 n02817516 bearskin n04141327/n04141327_9738.jpg 0.999998 False 1.014994e-06 0.999997
0 208 n02099712 Labrador_retriever 475 n02965783 car_mirror n02099712/n02099712_4740.jpg 0.999997 False 1.381732e-07 0.999996
0 874 n04487081 trolleybus 489 n03000134 chainlink_fence n04487081/n04487081_20135.jpg 0.999995 False 7.940290e-08 0.999995
0 698 n03877845 palace 439 n02817516 bearskin n03877845/n03877845_4810.jpg 0.999995 False 3.282633e-07 0.999995
0 767 n04116512 rubber_eraser 918 n06785654 crossword_puzzle n04116512/n04116512_25502.jpg 0.999995 False 4.554849e-07 0.999995
0 232 n02106166 Border_collie 805 n04254680 soccer_ball n02106166/n02106166_5249.jpg 0.999993 False 1.289811e-06 0.999992
0 796 n04229816 ski_mask 594 n03495258 harp n04229816/n04229816_2795.jpg 0.999991 False 2.929295e-07 0.999990
0 587 n03481172 hammer 652 n03763968 military_uniform n03481172/n03481172_12416.jpg 0.999991 False 3.176506e-06 0.999988
0 815 n04275548 spider_web 489 n03000134 chainlink_fence n04275548/n04275548_1389.jpg 0.999985 False 2.008315e-09 0.999985
0 162 n02088364 beagle 475 n02965783 car_mirror n02088364/n02088364_3504.jpg 0.999983 False 1.400137e-06 0.999982
0 204 n02098413 Lhasa 475 n02965783 car_mirror n02098413/n02098413_1614.jpg 0.999989 False 6.725060e-06 0.999982
0 938 n07715103 cauliflower 937 n07714990 broccoli n07715103/n07715103_3870.jpg 0.999984 False 3.816122e-06 0.999981
0 584 n03476684 hair_slide 823 n04317175 stethoscope n03476684/n03476684_10326.jpg 0.999980 False 2.773220e-07 0.999980
0 473 n02951585 can_opener 512 n03109150 corkscrew n02951585/n02951585_10256.jpg 0.999987 False 9.340505e-06 0.999978
0 830 n04336792 stretcher 678 n03814639 neck_brace n04336792/n04336792_177.jpg 0.999987 False 9.888779e-06 0.999977
0 981 n09835506 ballplayer 489 n03000134 chainlink_fence n09835506/n09835506_9604.jpg 0.999973 False 1.487881e-06 0.999972
0 652 n03763968 military_uniform 439 n02817516 bearskin n03763968/n03763968_5955.jpg 0.999971 False 6.417376e-07 0.999970
0 837 n04356056 sunglasses 785 n04162706 seat_belt n04356056/n04356056_9399.jpg 0.999978 False 1.304429e-05 0.999965
... ... ... ... ... ... ... ... ... ... ... ...
0 442 n02825657 bell_cote 698 n03877845 palace n02825657/n02825657_1377.jpg 0.153046 False 1.528487e-01 0.000197
0 177 n02092002 Scottish_deerhound 181 n02093647 Bedlington_terrier n02092002/n02092002_9707.jpg 0.455542 False 4.553869e-01 0.000156
0 414 n02769748 backpack 748 n04026417 purse n02769748/n02769748_12727.jpg 0.103284 False 1.031333e-01 0.000150
0 398 n02666196 abacus 508 n03085013 computer_keyboard n02666196/n02666196_13452.jpg 0.103876 False 1.037424e-01 0.000134
0 867 n04467665 trailer_truck 675 n03796401 moving_van n04467665/n04467665_3007.jpg 0.436308 False 4.361764e-01 0.000132
0 799 n04239074 sliding_door 756 n04049303 rain_barrel n04239074/n04239074_1253.jpg 0.062258 False 6.213059e-02 0.000127
0 897 n04554684 washer 412 n02747177 ashcan n04554684/n04554684_6136.jpg 0.145841 False 1.457146e-01 0.000126
0 860 n04443257 tobacco_shop 865 n04462240 toyshop n04443257/n04443257_20607.jpg 0.225237 False 2.251114e-01 0.000126
0 514 n03124043 cowboy_boot 885 n04525038 velvet n03124043/n03124043_1804.jpg 0.157008 False 1.569017e-01 0.000106
0 40 n01682714 American_chameleon 46 n01693334 green_lizard n01682714/n01682714_16348.jpg 0.449479 False 4.493797e-01 0.000099
0 248 n02109961 Eskimo_dog 172 n02091134 whippet n02109961/n02109961_5020.jpg 0.051372 False 5.127463e-02 0.000098
0 619 n03637318 lampshade 846 n04380533 table_lamp n03637318/n03637318_32996.jpg 0.486030 False 4.859447e-01 0.000086
0 999 n15075141 toilet_tissue 473 n02951585 can_opener n15075141/n15075141_6995.jpg 0.217353 False 2.172691e-01 0.000084
0 124 n01985128 crayfish 125 n01986214 hermit_crab n01985128/n01985128_13074.jpg 0.324877 False 3.248018e-01 0.000075
0 365 n02480495 orangutan 51 n01704323 triceratops n02480495/n02480495_377.jpg 0.046445 False 4.637058e-02 0.000074
0 46 n01693334 green_lizard 40 n01682714 American_chameleon n01693334/n01693334_3799.jpg 0.413724 False 4.136513e-01 0.000072
0 319 n02268443 dragonfly 320 n02268853 damselfly n02268443/n02268443_46580.jpg 0.487261 False 4.871902e-01 0.000071
0 871 n04483307 trimaran 621 n03649909 lawn_mower n04483307/n04483307_10908.jpg 0.133778 False 1.337191e-01 0.000059
0 40 n01682714 American_chameleon 46 n01693334 green_lizard n01682714/n01682714_3536.jpg 0.353491 False 3.534406e-01 0.000051
0 179 n02093256 Staffordshire_bullterrier 180 n02093428 American_Staffordshire_terrier n02093256/n02093256_1934.jpg 0.407620 False 4.075701e-01 0.000050
0 208 n02099712 Labrador_retriever 348 n02412080 ram n02099712/n02099712_4940.jpg 0.060132 False 6.008415e-02 0.000048
0 116 n01955084 chiton 327 n02317335 starfish n01955084/n01955084_4388.jpg 0.203223 False 2.031918e-01 0.000032
0 808 n04259630 sombrero 515 n03124170 cowboy_hat n04259630/n04259630_4303.jpg 0.278383 False 2.783532e-01 0.000030
0 861 n04447861 toilet_seat 731 n03970156 plunger n04447861/n04447861_3344.jpg 0.090458 False 9.042852e-02 0.000029
0 558 n03372029 flute 683 n03838899 oboe n03372029/n03372029_29249.jpg 0.383549 False 3.835292e-01 0.000020
0 750 n04033995 quilt 564 n03388549 four-poster n04033995/n04033995_10153.jpg 0.392732 False 3.927192e-01 0.000013
0 769 n04118776 rule 807 n04258138 solar_dish n04118776/n04118776_70444.jpg 0.188122 False 1.881120e-01 0.000010
0 518 n03127747 crash_helmet 796 n04229816 ski_mask n03127747/n03127747_7206.jpg 0.463441 False 4.634358e-01 0.000005
0 666 n03786901 mortar 762 n04081281 restaurant n03786901/n03786901_11507.jpg 0.106182 False 1.061794e-01 0.000002
0 987 n12144580 corn 828 n04332243 strainer n12144580/n12144580_2468.jpg 0.106642 False 1.066394e-01 0.000002

106126 rows × 11 columns

In [14]:
%%html
<style>
table {float:left}
</style>

Columns of dataset explained

| Column | Explanation |
| --- | --- |
| label_id | index of the actual label |
| label_name1 | encoded actual label (WordNet ID) |
| label_name2 | decoded actual label |
| prediction_id | index of the predicted label |
| prediction_name1 | encoded predicted label (WordNet ID) |
| prediction_name2 | decoded predicted label |
| file_name | directory of the image |
| confidence_in_prediction | model's probability for the predicted label |
| prediction_in_top_5 | True, if the actual label is in the top five predicted classes, otherwise False |
| confidence_in_label | model's probability for the actual label |
| confidence_difference | \|confidence_in_prediction - confidence_in_label\| |
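The last column can be recomputed directly from the two confidence columns; a toy check with made-up rows:

```python
import pandas as pd

# Toy frame with the two confidence columns described above
df = pd.DataFrame({
    'confidence_in_prediction': [1.000000, 0.455542],
    'confidence_in_label':      [2.533398e-08, 4.553869e-01],
})

# confidence_difference = |confidence_in_prediction - confidence_in_label|
df['confidence_difference'] = (
    df['confidence_in_prediction'] - df['confidence_in_label']
).abs()
print(df['confidence_difference'].round(6).tolist())  # [1.0, 0.000155]
```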

Define helper functions for the steps above

In [15]:
# To load images
def get_image(image_dir):
    image = load_img("/home/common_files/datasets/imagenet/pictures/"+image_dir,target_size = (299,299))
    return image    
In [16]:
# Preprocess image in an array to make it compatible with the Keras model and reshape it. 
def preprocess_image(image_raw):
    image = img_to_array(image_raw)
    image = image.reshape((1, image.shape[0], image.shape[1], image.shape[2]))
    image = preprocess_input(image)
    return image[0]
In [17]:
# Create an explanation for an image
def explanation_for_instance(image,model):
    explanation = explainer.explain_instance(image,model.predict,top_labels = 1,num_samples=500,random_seed=42)
    return explanation
In [18]:
# Mask the explanation: return the top ten superpixels for the model's most probable prediction, optionally hiding the rest of the image
def get_explained_image(explanation, positive_only, hide_rest):
    temp, mask = explanation.get_image_and_mask(explanation.top_labels[0], positive_only = positive_only, num_features=10, hide_rest=hide_rest)
    boundaries = mark_boundaries(temp / 2 + .5, mask)
    return boundaries

Create explanations in a loop

  • We create explanations only for images where confidence_difference is higher than 0.98, since there are more than 100,000 false predictions
  • and it takes too much time to generate an explanation for every image
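Instead of iterating in sorted order and breaking at the threshold, the same subset can be selected with a plain boolean filter; a sketch on a toy stand-in for results_images_df:

```python
import pandas as pd

# Toy stand-in for results_images_df (only the column that matters here)
results = pd.DataFrame({'confidence_difference': [0.999, 0.985, 0.97, 0.5]})

# Equivalent to sorting descending and breaking once the value drops
# below the threshold: keep only rows above 0.98
high_conf = results[results['confidence_difference'] > 0.98]
print(len(high_conf))  # 2
```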
In [34]:
# Order dataframe by confidence_difference descending to make iteration more efficient
results_images_df = results_images_df.sort_values(['confidence_difference'], ascending=False)

# Create explanation in loop

for i in range(len(results_images_df)):
    
    file_name = results_images_df.iloc[i]['file_name']
    image_raw = get_image(file_name)
    image = preprocess_image(image_raw)
    # Create explanation for predicted label
    explanation = explanation_for_instance(image, model_InceptionV3)
    # get explained image with the 10 most positively weighted superpixels, rest hidden
    explained_image_positive_and_hidden = get_explained_image(explanation, positive_only = True, hide_rest= True)
    # get explained image with the 10 positively or negatively weighted superpixels, hide_rest = False
    explained_image_pos_neg = get_explained_image(explanation, positive_only = False, hide_rest= False)
    
    #plt.savefig('../Output_Images/Predicted_ {}, Actual_ {}'.format(results_images_df.iloc[i]['prediction_name2'],results_images_df.iloc[i]['label_name2']))
    
    #plot only first 10
    if i < 10:
        
        fig, ax = plt.subplots(1,3, figsize = (15,15))
        ax[0].imshow(image_raw)
        ax[0].set_title('Predicted: {}, Actual: {}'.format(results_images_df.iloc[i]['prediction_name2'],results_images_df.iloc[i]['label_name2']), fontsize =12)
        ax[1].imshow(explained_image_positive_and_hidden)
        ax[1].set_title('10 most positive superpixels, rest hidden')
        ax[2].imshow(explained_image_pos_neg)
        ax[2].set_title('Positive and negative weighted superpixels')
        plt.show()
    
    if results_images_df.iloc[i]['confidence_difference'] < 0.98:
        break
    
    if i >=10:
        break

LIME vs. Class Activation Maps

  • In this section, Class Activation Maps are generated and compared with the results of the LimeImageExplainer.

  • Class Activation Maps are simple to implement and highlight the discriminative image regions a CNN uses to identify a specific class in the image.

  • Class Activation Maps are often called Attention Maps.
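At its core, a (Grad-)CAM heatmap is a channel-weighted sum of the last convolutional layer's feature maps, followed by a ReLU and max-normalization; the Keras code below performs the same arithmetic with real gradients. A minimal numpy sketch with made-up activations and made-up channel weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical last-conv activations: an 8x8 spatial grid with 192 channels
feature_maps = rng.random((8, 8, 192))

# Per-channel weights (in Grad-CAM these come from spatially pooled gradients);
# here channel 0 supports the class and channel 1 speaks against it
channel_weights = np.zeros(192)
channel_weights[0] = 1.0
channel_weights[1] = -0.5

# Weighted sum over channels, then ReLU and max-normalization
heatmap = (feature_maps * channel_weights).sum(axis=-1)
heatmap = np.maximum(heatmap, 0)  # keep only positive evidence
heatmap /= heatmap.max()
print(heatmap.shape, float(heatmap.max()))  # (8, 8) 1.0
```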

Get Class Activation Map for a single prediction

In [19]:
# import keras-vis
import vis
from vis.utils import utils
from vis.visualization import visualize_saliency, overlay, visualize_cam,visualize_cam_with_losses
from keras import activations
import os
In [20]:
image_raw = load_img("/home/common_files/datasets/imagenet/pictures/n01910747/n01910747_15064.jpg",target_size = (299,299))
plt.imshow(image_raw)

# n01910747_13855.jpg
#n01484850/n01484850_10798.jpg
# n01560419/n01560419_1316.jpg
Out[20]:
<matplotlib.image.AxesImage at 0x7f21bca855c0>
In [20]:
def cam(img_path):
    # Grad-CAM: weight the last convolutional feature maps by the pooled
    # gradients of the predicted class score, then overlay the heatmap.
    import numpy as np
    import pandas as pd
    import cv2
    import matplotlib.image as mpimg
    import matplotlib.pyplot as plt
    from keras import backend as K
    from keras.preprocessing import image
    from keras.applications.inception_v3 import InceptionV3, preprocess_input, decode_predictions

    K.clear_session()
    model = InceptionV3()

    # Load and preprocess the image
    img = image.load_img(img_path, target_size=(299, 299))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)

    # Predict and decode the top classes
    preds = model.predict(x)
    predictions = pd.DataFrame(decode_predictions(preds, top=3)[0], columns=['col1', 'category', 'probability']).iloc[:, 1:]
    print('decoded predictions', predictions)
    argmax = np.argmax(preds[0])
    print('argmax: ', argmax)

    # Gradients of the predicted class score w.r.t. the last conv layer
    output = model.output[:, argmax]
    last_conv_layer = model.get_layer('conv2d_94')
    grads = K.gradients(output, last_conv_layer.output)[0]
    pooled_grads = K.mean(grads, axis=(0, 1, 2))
    iterate = K.function([model.input], [pooled_grads, last_conv_layer.output[0]])
    pooled_grads_value, conv_layer_output_value = iterate([x])

    # Weight each of the 192 channels by its pooled gradient
    for i in range(192):
        conv_layer_output_value[:, :, i] *= pooled_grads_value[i]
    heatmap = np.mean(conv_layer_output_value, axis=-1)
    heatmap = np.maximum(heatmap, 0)
    heatmap /= np.max(heatmap)

    # Resize the heatmap to the original image size and superimpose it
    img = cv2.imread(img_path)
    heatmap = cv2.resize(heatmap, (img.shape[1], img.shape[0]))
    heatmap = np.uint8(255 * heatmap)
    heatmap = cv2.applyColorMap(heatmap, cv2.COLORMAP_JET)
    superimposed_img = heatmap * .8 + img
    output_file = 'test.jpeg'
    cv2.imwrite(output_file, superimposed_img)

    plt.imshow(mpimg.imread(output_file))
    plt.axis('off')
    plt.title(predictions.loc[0, 'category'].upper())
    return None
In [30]:
cam('/home/common_files/datasets/imagenet/pictures/n01910747/n01910747_15064.jpg')
decoded predictions                       category  probability
0                    jellyfish     0.916035
1                  tiger_shark     0.000732
2  West_Highland_white_terrier     0.000688
argmax:  107
In [21]:
#Preprocess input
image = img_to_array(image_raw)
image = image.reshape((1, image.shape[0], image.shape[1], image.shape[2]))
image = preprocess_input(image)
In [22]:
# predict
prediction = model_InceptionV3.predict(image)
print(pd.DataFrame(decode_predictions(prediction, top=3)[0],columns=['col1','class','probability']).iloc[:,1:])
# get the corresponding ImageNet index for the prediction
argmax = np.argmax(prediction[0])
print('\nImageNet Index for prediction: ',argmax)
                         class  probability
0                    jellyfish     0.916035
1                  tiger_shark     0.000732
2  West_Highland_white_terrier     0.000688

ImageNet Index for prediction:  107
In [23]:
# get last layer
prediction_layer = model_InceptionV3.get_layer('predictions')
print(prediction_layer)
<keras.layers.core.Dense object at 0x7f21ce3ef2e8>
In [24]:
# get index of last layer
#from keras import activations
prediction_layer_idx = utils.find_layer_idx(model_InceptionV3, 'predictions')
print(prediction_layer_idx)
#model_InceptionV3.layers[prediction_layer_idx].activation = activations.linear
#model_InceptionV3 = utils.apply_modifications(model_InceptionV3)
312
In [25]:
# filter_indices is the label index of the predicted class (here: jellyfish)
cam = visualize_cam(model=model_InceptionV3, layer_idx=prediction_layer_idx, filter_indices=argmax, seed_input=image)#, backprop_modifier='relu', grad_modifier='absolute')
plt.imshow(cam)
cam.shape
Out[25]:
(299, 299, 3)
In [26]:
plt.imshow(overlay(np.array(image_raw),cam, alpha=0.4))
Out[26]:
<matplotlib.image.AxesImage at 0x7f21bc937f98>
In [27]:
# Get the red area of the attention map (most positive area for the CNN's classification)
# initialize a 2D array of zeros, 299x299 pixels
cam_mask = np.zeros((299,299))

# fill array, where blue channel is 0
mask = cam[:,:,2] == 0

# Set to 1 the pixels where mask is True
cam_mask[mask] = 1
plt.imshow(cam_mask)
Out[27]:
<matplotlib.image.AxesImage at 0x7f21bc8ad978>
In [28]:
# get the LIME mask for the top superpixels of the predicted label

# first create an explanation for the predicted class index (labels = (argmax,))
explanation = explainer.explain_instance(image[0], model_InceptionV3.predict,num_samples = 500, labels = (argmax,), top_labels = None)
temp, lime_mask = explanation.get_image_and_mask(argmax, positive_only = True,  hide_rest = True)
boundaries = mark_boundaries(temp / 2 + .5, lime_mask)
plt.imshow(lime_mask)
Out[28]:
<matplotlib.image.AxesImage at 0x7f21bc86eb38>

Compare both masks

Intersection of LIME and CAM

In [29]:
# Compare both masks
masks = np.zeros((299,299,3))
masks[:,:,0] = cam_mask
masks[:,:,1] = lime_mask
plt.imshow(masks)
Out[29]:
<matplotlib.image.AxesImage at 0x7f21bc7d2ac8>
In [30]:
# Pixels in the intersection (yellow)
pixel_intersection = ((lime_mask + cam_mask) == 2).sum()
# CAM pixels excluding the intersection
pixel_cam = (cam_mask == 1).sum() - pixel_intersection
# LIME pixels excluding the intersection
pixel_lime = (lime_mask == 1).sum() - pixel_intersection

# Match as a fraction of the union of both masks (Jaccard index)
match_in_percent = pixel_intersection / (pixel_lime + pixel_cam + pixel_intersection)

fig, ax = plt.subplots(1,4, figsize = (15,15))

ax[0].imshow(image_raw)
ax[0].set_title('Original Image')
ax[1].imshow(boundaries)
ax[1].set_title('Lime explanation')
ax[2].imshow(overlay(np.array(image_raw),cam, alpha=0.5))
ax[2].set_title('Class Activation Map')
ax[3].imshow(masks)
ax[3].set_title('Overlay of LIME & CAM')


plt.show()
print('Overlay of LIME and Class Activation Map:\n')
print('Pixel in intersection (Yellow): {} \nPixel only in CAM (Red): \t{} \nPixel only in LIME (Green): \t{} \nMatch in percentage (Yellow): \t{}'.format(pixel_intersection,pixel_cam, pixel_lime,match_in_percent))
Overlay of LIME and Class Activation Map:

Pixel in intersection (Yellow): 4869 
Pixel only in CAM (Red): 	14997 
Pixel only in LIME (Green): 	2142 
Match in percentage (Yellow): 	0.22123773173391495
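match_in_percent above is the Jaccard index (intersection over union) of the two binary masks; a toy check with tiny hand-made masks:

```python
import numpy as np

lime_mask = np.array([[1, 1, 0],
                      [0, 1, 0],
                      [0, 0, 0]])
cam_mask  = np.array([[0, 1, 0],
                      [0, 1, 1],
                      [0, 0, 0]])

# Pixels where both masks are 1, and pixels covered by either mask
intersection = ((lime_mask + cam_mask) == 2).sum()  # 2
union = ((lime_mask + cam_mask) >= 1).sum()         # 4
match = intersection / union
print(intersection, union, round(match, 2))  # 2 4 0.5
```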

Wrap the code above in functions and create explanations for a group of predictions

In [39]:
plt.imshow(explanation.segments)
Out[39]:
<matplotlib.image.AxesImage at 0x7f0036a91710>
In [32]:
def get_mask_lime(path,image_index):
    #Preprocess input
    image_raw = load_img(path,target_size = (299,299))
    image = img_to_array(image_raw)
    image = image.reshape((1, image.shape[0], image.shape[1], image.shape[2]))
    image = preprocess_input(image)
    
    explanation = explainer.explain_instance(image[0], model_InceptionV3.predict,num_samples = 500, labels = (image_index,), top_labels = None)
    temp, lime_mask = explanation.get_image_and_mask(image_index, positive_only = True,  hide_rest = True)
    boundaries = mark_boundaries(temp / 2 + .5, lime_mask)
    #plt.imshow(lime_mask)
    
    return lime_mask, boundaries
In [33]:
def get_mask_cam(path,image_index):
    
    image_raw = load_img(path,target_size = (299,299))
    # Preprocess image in an array to make it compatible with the Keras model and reshape it.
    image = img_to_array(image_raw)
    image = image.reshape((1, image.shape[0], image.shape[1], image.shape[2]))
    image = preprocess_input(image)
    
    cam_mask = np.zeros((299,299))
    cam = visualize_cam(model_InceptionV3, layer_idx=prediction_layer_idx, filter_indices=image_index, seed_input=image)
    # fill array, where blue channel is 0
    mask = cam[:,:,2] == 0
    # Set to 1 the pixels where mask is True
    cam_mask[mask] = 1
    
    return cam_mask, cam
In [34]:
def compare_masks(lime_mask,cam_mask):
    # Stack both masks into an RGB image: CAM in red, LIME in green,
    # so the intersection shows up yellow
    masks = np.zeros((299,299,3))
    masks[:,:,0] = cam_mask
    masks[:,:,1] = lime_mask

    # Pixels in the intersection (yellow)
    pixel_intersection = ((lime_mask + cam_mask) == 2).sum()
    # CAM pixels excluding the intersection
    pixel_cam = (cam_mask == 1).sum() - pixel_intersection
    # LIME pixels excluding the intersection
    pixel_lime = (lime_mask == 1).sum() - pixel_intersection
    # Match as a fraction of the union of both masks (Jaccard index)
    match_in_percent = pixel_intersection / (pixel_lime + pixel_cam + pixel_intersection)

    return masks, pixel_intersection, pixel_lime, pixel_cam, match_in_percent
In [35]:
import json
classJSON = "/home/viscif/.keras/models/imagenet_class_index.json"

with open(classJSON) as json_file:
    label_list = json.load(json_file)
invers_label_list = {}
for class_ID, value in label_list.items():
    invers_label_list[value[0]] = int(class_ID)
In [36]:
def get_prediction(path):
    image_raw = load_img(path, target_size = (299,299))
    image = img_to_array(image_raw)
    image = image.reshape((1, image.shape[0], image.shape[1], image.shape[2]))
    image = preprocess_input(image)
    prediction = model_InceptionV3.predict(image)
    top = decode_predictions(prediction)[0][0]
    predicted_label, predicted_label_name = top[0], top[1]
    return prediction[0], predicted_label, predicted_label_name
In [37]:
import os
base_path = '/home/common_files/datasets/imagenet/pictures'
resultsDF = pd.DataFrame()
i = 0
for root, subfolders, _ in os.walk(base_path):

    for class_name in subfolders:
        class_path = os.path.join(root, class_name)

        for class_dir, _, file_names in os.walk(class_path):

            for file in file_names:
                i += 1
                path_to_image = os.path.join(class_dir, file)
                image_raw = load_img(path_to_image, target_size=(299,299))
                image_index = invers_label_list[class_name]
                actual_label_name = label_list[str(image_index)][1]
                lime_mask, boundaries = get_mask_lime(path_to_image, image_index)
                cam_mask, cam_map = get_mask_cam(path_to_image, image_index)
                both_masks, pixel_intersection, pixel_lime, pixel_cam, match_in_percent = compare_masks(lime_mask, cam_mask)
                prediction,predicted_label,predicted_label_name = get_prediction(path_to_image)
                
                data = {
                    
                    'lime_mask' : [lime_mask],
                    'cam_mask' : [cam_mask],
                    'file_name': [file],
                    'image_class_ID' : [image_index],
                    'percentage_lime_in_cam' : [match_in_percent*100],
                    'pixel_intersection_lime_cam' : [pixel_intersection],
                    'pixel_only_lime' : [pixel_lime],
                    'pixel_only_cam' : [pixel_cam],
                    'actual_label' : [class_name],
                    'actual_label_name' : [actual_label_name],
                    'predicted_label': [predicted_label],
                    'predicted_label_name' : [predicted_label_name],
                    'prediction' : [prediction]
                    
                }
                
                resultsDF = resultsDF.append(pd.DataFrame(data),ignore_index = False)
                #plot only first 10
                if i <= 10:
                    fig, ax = plt.subplots(1,4, figsize = (15,15))
                    ax[0].imshow(image_raw)
                    ax[0].set_title('Original Image')
                    ax[1].imshow(boundaries)
                    ax[1].set_title('Lime explanation for actual label')
                    ax[2].imshow(overlay(np.array(image_raw),cam_map, alpha=0.4))
                    ax[2].set_title('Class Activation Map')
                    ax[3].imshow(both_masks)
                    ax[3].set_title('Overlay of LIME & CAM')
                    plt.show()
                    
                    print('Overlay of LIME and Class Activation Map:\n')
                    print('Predicted label: \t\t{} \nActual label: \t\t\t{} \nPixel in intersection (Yellow): {} \nPixel only in CAM (Red): \t{} \nPixel only in LIME (Green): \t{} \nMatch in percentage (Yellow): \t{}'.
                          format(predicted_label_name,actual_label_name,pixel_intersection,pixel_cam, pixel_lime,match_in_percent))
                if i == 11:
                    break
        resultsDF.to_hdf('results_lime_cam.h5', key = class_name)
Overlay of LIME and Class Activation Map:

Predicted label: 		sulphur_butterfly 
Actual label: 			sulphur_butterfly 
Pixel in intersection (Yellow): 322 
Pixel only in CAM (Red): 	9209 
Pixel only in LIME (Green): 	7286 
Match in percentage (Yellow): 	0.019147291431289765
Overlay of LIME and Class Activation Map:

Predicted label: 		sulphur_butterfly 
Actual label: 			sulphur_butterfly 
Pixel in intersection (Yellow): 2566 
Pixel only in CAM (Red): 	9845 
Pixel only in LIME (Green): 	6038 
Match in percentage (Yellow): 	0.13908612932950296
Overlay of LIME and Class Activation Map:

Predicted label: 		sulphur_butterfly 
Actual label: 			sulphur_butterfly 
Pixel in intersection (Yellow): 4360 
Pixel only in CAM (Red): 	3252 
Pixel only in LIME (Green): 	1823 
Match in percentage (Yellow): 	0.4621091679915209
Overlay of LIME and Class Activation Map:

Predicted label: 		sulphur_butterfly 
Actual label: 			sulphur_butterfly 
Pixel in intersection (Yellow): 4538 
Pixel only in CAM (Red): 	3997 
Pixel only in LIME (Green): 	3583 
Match in percentage (Yellow): 	0.37448423832315564
Overlay of LIME and Class Activation Map:

Predicted label: 		sulphur_butterfly 
Actual label: 			sulphur_butterfly 
Pixel in intersection (Yellow): 1475 
Pixel only in CAM (Red): 	6520 
Pixel only in LIME (Green): 	12636 
Match in percentage (Yellow): 	0.07149435315786923
Overlay of LIME and Class Activation Map:

Predicted label: 		coral_fungus 
Actual label: 			sulphur_butterfly 
Pixel in intersection (Yellow): 0 
Pixel only in CAM (Red): 	0 
Pixel only in LIME (Green): 	7477 
Match in percentage (Yellow): 	0.0
Overlay of LIME and Class Activation Map:

Predicted label: 		sulphur_butterfly 
Actual label: 			sulphur_butterfly 
Pixel in intersection (Yellow): 5395 
Pixel only in CAM (Red): 	2051 
Pixel only in LIME (Green): 	2675 
Match in percentage (Yellow): 	0.5330500938642426
Overlay of LIME and Class Activation Map:

Predicted label: 		sulphur_butterfly 
Actual label: 			sulphur_butterfly 
Pixel in intersection (Yellow): 5246 
Pixel only in CAM (Red): 	6421 
Pixel only in LIME (Green): 	1508 
Match in percentage (Yellow): 	0.39817836812144214
Overlay of LIME and Class Activation Map:

Predicted label: 		sulphur_butterfly 
Actual label: 			sulphur_butterfly 
Pixel in intersection (Yellow): 29 
Pixel only in CAM (Red): 	10564 
Pixel only in LIME (Green): 	11978 
Match in percentage (Yellow): 	0.0012848345221744717
Overlay of LIME and Class Activation Map:

Predicted label: 		sulphur_butterfly 
Actual label: 			sulphur_butterfly 
Pixel in intersection (Yellow): 4298 
Pixel only in CAM (Red): 	2951 
Pixel only in LIME (Green): 	1745 
Match in percentage (Yellow): 	0.4778741383144318
/usr/lib/python3.7/site-packages/pandas/core/generic.py:2377: PerformanceWarning: 
your performance may suffer as PyTables will pickle object types that it cannot
map directly to c-types [inferred_type->mixed,key->block2_values] [items->['lime_mask', 'cam_mask', 'file_name', 'actual_label', 'actual_label_name', 'predicted_label', 'predicted_label_name', 'prediction']]

  return pytables.to_hdf(path_or_buf, key, self, **kwargs)
In [ ]: